Instruction tuning, a new learning paradigm that fine-tunes pre-trained language models on tasks specified through instructions, has shown promising zero-shot performance on various natural language processing tasks. However, it has not yet been explored for vision and multimodal tasks. In this work, we introduce MultiInstruct, the first multimodal instruction tuning benchmark dataset, which consists of 47 diverse multimodal tasks covering 11 broad categories. Each task is equipped with at least 5,000 instances (input-output pairs) drawn from existing open-source datasets and 5 expert-written instructions. We take OFA as the base pre-trained model for multimodal instruction tuning and, to improve its performance, explore multiple transfer learning strategies that leverage the large-scale Natural Instructions dataset. Experimental results demonstrate strong zero-shot performance on various unseen multimodal tasks and the benefit of transfer learning from text-only instructions. We also design a new evaluation metric, Sensitivity, to assess how sensitive the model is to the wording of instructions. Our results indicate that the model becomes less sensitive to varying instructions after fine-tuning on a diverse set of tasks with multiple instructions per task.
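To make the Sensitivity metric concrete, here is a minimal sketch that scores one model on one task under several instruction variants, assuming Sensitivity is measured as the dispersion (coefficient of variation) of the task metric across instructions; the paper's exact formulation may differ.

```python
import statistics

def sensitivity(scores_per_instruction):
    """Dispersion of a task metric across instruction variants.

    `scores_per_instruction` maps each of a task's instructions to the
    score the model achieves when prompted with it. Lower values mean
    the model is less sensitive to how the task is worded.
    """
    scores = list(scores_per_instruction.values())
    mean = statistics.mean(scores)
    # Coefficient of variation: standard deviation normalized by the mean.
    return statistics.stdev(scores) / mean if mean else float("inf")

# Example: accuracy of one model on one task under 5 instruction variants.
print(sensitivity({
    "Answer the question about the image.": 0.61,
    "Look at the picture and respond to the query.": 0.58,
    "Given the image, answer the question.": 0.63,
    "What is the answer to this visual question?": 0.60,
    "Use the image to respond to the question below.": 0.59,
}))
```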
Growing privacy concerns over personal text data have promoted the development of federated learning (FL) in recent years. However, existing studies applying FL to NLP are not suited to coordinating participants with heterogeneous or private learning objectives. In this study, we broaden the application scope of FL in NLP by proposing an Assign-Then-Contrast (ATC) framework, which enables clients with heterogeneous NLP tasks to construct an FL course and learn useful knowledge from one another. Specifically, clients first perform local training on unified tasks assigned by the server rather than on their own learning objectives; this is called the Assign training stage. Afterwards, in the Contrast training stage, clients train on their different local learning objectives and exchange knowledge with other clients who contribute consistent and useful model updates. We conduct extensive experiments on six widely used datasets covering both Natural Language Understanding (NLU) and Natural Language Generation (NLG) tasks, and the proposed ATC framework achieves significant improvements over various baseline methods. The source code is available at \url{https://github.com/alibaba/FederatedScope/tree/master/federatedscope/nlp/hetero_tasks}.
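The Contrast stage's core idea, keeping only peer updates that are consistent with a client's own update, can be sketched as follows; representing updates as flattened vectors and filtering by cosine similarity is our assumption, not necessarily the paper's exact selection rule.

```python
import numpy as np

def merge_consistent_updates(my_update, peer_updates, threshold=0.0):
    """Contrast-stage sketch: keep peer model updates whose direction
    agrees with this client's own update (cosine similarity above a
    threshold) and average them into a merged update."""
    def cosine(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    kept = [u for u in peer_updates if cosine(my_update, u) > threshold]
    return np.mean([my_update, *kept], axis=0)

# Toy example with flattened model deltas from two peers.
mine = np.array([0.20, -0.10, 0.05])
peers = [np.array([0.18, -0.12, 0.04]),   # consistent, kept
         np.array([-0.30, 0.20, -0.10])]  # inconsistent, filtered out
print(merge_consistent_updates(mine, peers))
```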
Open Relation Extraction (OpenRE) aims to discover novel relations from open domains. Previous OpenRE methods mainly suffer from two problems: (1) insufficient capacity to discriminate between known and novel relations; when the conventional test setting is extended to a more general one in which test data may also come from seen classes, existing approaches suffer a significant performance decline; (2) secondary labeling is required before practical application; existing methods cannot assign human-readable, meaningful types to novel relations, which downstream tasks urgently require. To address these issues, we propose the Active Relation Discovery (ARD) framework, which uses relational outlier detection to discriminate between known and novel relations and employs active learning to label novel relations. Extensive experiments on three real-world datasets show that ARD significantly outperforms previous state-of-the-art methods in both the conventional setting and our proposed general OpenRE setting. The source code and datasets will be made available for reproducibility.
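As an illustration of relational outlier detection for separating known from novel relations, the sketch below flags a test relation embedding as novel when it lies far from every known-relation centroid; the centroid-distance rule and quantile threshold are generic stand-ins for ARD's actual detector.

```python
import numpy as np

def flag_novel(known, test, q=0.95):
    """Flag test relation embeddings as novel when their distance to the
    nearest known-relation centroid exceeds a threshold calibrated on
    the known data (the q-quantile of within-class distances)."""
    centroids = {r: e.mean(axis=0) for r, e in known.items()}
    calib = np.concatenate([np.linalg.norm(e - centroids[r], axis=1)
                            for r, e in known.items()])
    threshold = np.quantile(calib, q)
    dists = np.stack([np.linalg.norm(test - c, axis=1)
                      for c in centroids.values()]).min(axis=0)
    return dists > threshold  # True = candidate novel relation

rng = np.random.default_rng(0)
known = {"born_in": rng.normal(0, 1, (50, 8)),
         "works_for": rng.normal(3, 1, (50, 8))}
test = np.vstack([rng.normal(0, 1, (5, 8)),    # resembles a seen class
                  rng.normal(10, 1, (5, 8))])  # likely novel
print(flag_novel(known, test))
```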
While pre-trained language models (LMs) have brought significant improvements to many NLP tasks, there is growing interest in probing the abilities of LMs and explaining their predictions. However, existing works usually focus only on certain capabilities through specific downstream tasks, and datasets that directly evaluate the masked-word prediction performance and the interpretability of pre-trained LMs are lacking. To fill this gap, we propose a novel evaluation benchmark that provides annotated data in both English and Chinese. It tests LM abilities along multiple dimensions, i.e., grammar, semantics, knowledge, reasoning, and computation. In addition, it provides carefully annotated token-level rationales that satisfy sufficiency and compactness. It contains perturbed instances for each original instance, so that rationale consistency under perturbation can be used as a metric for faithfulness, one perspective on interpretability. We conduct experiments on several widely used pre-trained LMs. The results show that they perform worse on the dimensions of knowledge and computation, and that their plausibility is far from satisfactory in all dimensions, especially when the rationale is short. Moreover, the pre-trained LMs we evaluate are not robust on syntax-aware data. We will release this evaluation benchmark at \url{http://xyz} and hope it can facilitate research progress on pre-trained LMs.
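As one concrete reading of the faithfulness metric, the sketch below scores rationale consistency as the Jaccard overlap between the token-level rationales extracted for an original instance and for its perturbed counterpart; the benchmark's exact consistency score may differ.

```python
def rationale_consistency(orig_rationale, perturbed_rationale):
    """Jaccard overlap between two sets of rationale token indices:
    1.0 means the model justifies its prediction with the same tokens
    under perturbation; lower values indicate less faithful rationales."""
    a, b = set(orig_rationale), set(perturbed_rationale)
    return len(a & b) / len(a | b) if a | b else 1.0

# Tokens 2 and 5 form the rationale for both versions; the perturbed
# instance additionally highlights token 7.
print(rationale_consistency({2, 5}, {2, 5, 7}))  # 0.666...
```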
Entity Set Expansion (ESE) is a valuable task that aims to find entities of the target semantic category described by given seed entities. Thanks to its ability to discover knowledge, various NLP and downstream applications benefit from ESE. Although existing bootstrapping methods have achieved great progress, most of them still rely on manually pre-defined context patterns. A non-negligible shortcoming of pre-defined context patterns is that they cannot flexibly generalize to all semantic categories, a phenomenon we call "semantic sensitivity". To address this problem, we devise a context pattern generation module that leverages autoregressive language models (e.g., GPT-2) to automatically generate high-quality context patterns for entities. In addition, we propose GAPA, a novel ESE framework that uses the generated patterns to expand target entities. Extensive experiments and detailed analyses on three widely used datasets demonstrate the effectiveness of our method. All code for our experiments will be made available for reproducibility.
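A minimal sketch of the pattern-generation idea, using GPT-2 through the Hugging Face transformers pipeline: prompt the LM with a seed entity and mask the entity in each continuation to obtain a reusable context pattern. The prompt format and post-processing here are our assumptions, not GAPA's exact procedure.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def context_patterns(entity, n=3):
    """Generate candidate context patterns for a seed entity by sampling
    continuations from GPT-2 and masking out the entity mention."""
    outputs = generator(f"{entity} is a", max_new_tokens=10,
                        num_return_sequences=n, do_sample=True)
    return [o["generated_text"].replace(entity, "<ENTITY>")
            for o in outputs]

print(context_patterns("Chicago"))
# e.g. ['<ENTITY> is a city in the United States ...', ...]
```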
Evaluation in machine learning is usually informed by past choices, for example which datasets or metrics to use. This standardization enables comparison on an equal footing using leaderboards, but the evaluation choices become sub-optimal as better alternatives emerge. This problem is especially pertinent in natural language generation, which requires ever-improving suites of datasets, metrics, and human evaluation to make definitive claims. To make following best model-evaluation practices easier, we introduce GEMv2. The new version of the Generation, Evaluation, and Metrics benchmark provides modular infrastructure for dataset, model, and metric developers to benefit from each other's work. GEMv2 supports 40 documented datasets in 51 languages. Models for all datasets can be evaluated online, and our interactive data-card creation and rendering tools make it easier to add new datasets to the living benchmark.
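For a sense of how such a living benchmark is typically consumed, the snippet below loads one GEM task through the Hugging Face datasets hub; the dataset, config, and field names are assumptions about how the benchmark is published rather than a documented GEMv2 API.

```python
from datasets import load_dataset

# "gem"/"common_gen" and the field names are illustrative assumptions.
common_gen = load_dataset("gem", "common_gen", split="validation")
example = common_gen[0]
print(example["concepts"], "->", example["target"])
```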
The academic literature of the social sciences records human civilization and studies human social problems. With the large-scale growth of this literature, quickly finding existing research on relevant issues has become an urgent need for researchers. Previous studies, such as SciBERT, have shown that pre-training on domain-specific text can improve the performance of natural language processing tasks in those fields. However, there is no pre-trained language model for the social sciences, so this paper proposes models pre-trained on a large number of abstracts published in Social Science Citation Index (SSCI) journals. These models, available on GitHub (https://github.com/s-t-full-text-knowledge-mining/ssci-bert), show excellent performance on discipline classification and abstract structure-function recognition tasks with social science literature.
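The recipe behind such domain models is continued masked-language-model pre-training on in-domain text. Below is a minimal sketch using Hugging Face transformers; the base checkpoint, data file, and hyperparameters are illustrative, not the paper's exact configuration.

```python
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# One abstract per line in a plain-text file (hypothetical file name).
dataset = load_dataset("text", data_files={"train": "ssci_abstracts.txt"})
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ssci-bert",
                           per_device_train_batch_size=16,
                           num_train_epochs=1),
    train_dataset=dataset["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer,
                                                  mlm_probability=0.15),
)
trainer.train()
```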
Recent video text spotting methods usually require a three-stage pipeline: detecting text in individual images, recognizing the localized text, tracking the text streams, and post-processing to generate the final results. These methods typically follow the tracking-by-matching paradigm and develop sophisticated pipelines. In this paper, rooted in Transformer sequence modeling, we propose a simple but effective end-to-end video text DEtection, Tracking, and Recognition framework (TransDETR). TransDETR has two main advantages: 1) different from the explicit matching paradigm between adjacent frames, TransDETR tracks and recognizes each text implicitly through a dedicated query, termed a text query, over a long-range temporal sequence (more than 7 frames); 2) TransDETR is the first end-to-end trainable video text spotting framework that simultaneously addresses the three sub-tasks (i.e., text detection, tracking, and recognition). Extensive experiments on four video text datasets (i.e., ICDAR2013 Video, ICDAR2015 Video, Minetto, and YVT (YouTube Video Text)) demonstrate that TransDETR achieves state-of-the-art performance, with up to around 8.0% improvement on video text spotting tasks. The code for TransDETR can be found at https://github.com/weijiawu/transdetr.
This paper offers a comprehensive review of research on Natural Language Generation (NLG) over the past two decades, especially in relation to deep learning methods for data-to-text and text-to-text generation, as well as new applications of NLG technology. The survey aims to (a) give an up-to-date synthesis of research on the core tasks of NLG and the architectures adopted in the field; (b) detail the various NLG tasks and datasets, and draw attention to the challenges in NLG evaluation, focusing on different evaluation methods and their relationships; and (c) highlight some future emphases and relatively recent research issues that arise from the increasing synergy between NLG and other areas of artificial intelligence, such as computer vision, text, and computational creativity.
Weakly supervised object localization (WSOL) aims to learn an object localizer using only image-level labels. Techniques based on convolutional neural networks (CNNs) often highlight the most discriminative parts of an object while ignoring its full extent. Recently, transformer architectures have been deployed for WSOL to capture long-range feature dependencies with the self-attention mechanism and multilayer perceptron structure. However, transformers lack the locality inductive bias inherent to CNNs and may therefore degrade local feature details in WSOL. In this paper, we propose a novel transformer-based framework, termed LCTR (Local Continuity TRansformer), which aims to enhance the local perception capability of global features among long-range feature dependencies. To this end, we propose a relational patch-attention module (RPAM), which considers cross-patch information globally. We further design a cue digging module (CDM), which uses local features to guide the model's learning toward highlighting weak local responses. Finally, comprehensive experiments are conducted on two widely used datasets, i.e., CUB-200-2011 and ILSVRC, to verify the effectiveness of our method.